Dominance and Optimisation Based on Scale-Invariant Maximum Margin Preference Learning

Authors

  • Mojtaba Montazery
  • Nic Wilson
Abstract

In the task of preference learning, there can be natural invariance properties that one might often expect a method to satisfy. These include (i) invariance to scaling of a pair of alternatives, e.g., replacing a pair (a,b) by (2a,2b); and (ii) invariance to rescaling of features across all alternatives. Maximum margin learning approaches satisfy such invariance properties for pairs of test vectors, but not for the preference input pairs, i.e., scaling the inputs in a different way could result in a different preference relation. In this paper we define and analyse more cautious preference relations that are invariant to the scaling of features, or inputs, or both simultaneously; this leads to computational methods for testing dominance with respect to the induced relations, and for generating optimal solutions among a set of alternatives. In our experiments, we compare the relations and their associated optimality sets based on their decisiveness, computation time and cardinality of the optimal set. We also discuss connections with imprecise probability.
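To make the invariance issue concrete, below is a minimal sketch (not the authors' method) of the standard RankSVM-style max-margin reduction of preference learning, using scikit-learn's LinearSVC; the training pairs, feature dimension and scaling factor are hypothetical. It illustrates the two behaviours mentioned above: the induced relation sign(w·(x−y)) is unchanged when a test pair is rescaled, whereas rescaling a training input pair alters the margin constraints and can therefore produce a different learned relation.

```python
# Minimal sketch (assumed setup, not the paper's code) of max-margin
# preference learning via the usual RankSVM-style reduction.
import numpy as np
from sklearn.svm import LinearSVC

def learn_preference_weights(pairs, C=1.0):
    """Learn w from preference pairs (a, b), read as 'a is preferred to b'.

    Each pair contributes the difference vector a - b with label +1 and
    b - a with label -1; a linear SVM without intercept is then trained.
    """
    diffs = np.array([a - b for a, b in pairs])
    X = np.vstack([diffs, -diffs])
    y = np.hstack([np.ones(len(diffs)), -np.ones(len(diffs))])
    clf = LinearSVC(C=C, fit_intercept=False, max_iter=10_000)
    clf.fit(X, y)
    return clf.coef_.ravel()

def prefers(w, x, y_vec):
    """Induced relation: x is preferred to y_vec iff w . (x - y_vec) > 0."""
    return float(np.dot(w, x - y_vec)) > 0

# Hypothetical training preferences over two-feature alternatives.
pairs = [(np.array([2.0, 1.0]), np.array([1.0, 2.0])),
         (np.array([1.0, 0.5]), np.array([0.5, 1.5]))]
w = learn_preference_weights(pairs)

# (i) Scaling a *test* pair leaves the induced relation unchanged,
#     since sign(w . (2x - 2y)) = sign(w . (x - y)).
x, y_vec = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert prefers(w, x, y_vec) == prefers(w, 2 * x, 2 * y_vec)

# (ii) Scaling a *training* pair, e.g. (a, b) -> (2a, 2b), changes the
#      margin constraints and can yield a different w, hence a different
#      preference relation -- the sensitivity that the paper's more
#      cautious, scale-invariant relations are designed to avoid.
scaled_pairs = [(2 * pairs[0][0], 2 * pairs[0][1]), pairs[1]]
w_scaled = learn_preference_weights(scaled_pairs)
print("w          =", w)
print("w (scaled) =", w_scaled)
```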


Similar Articles

Ratio-Based Multiple Kernel Clustering

Maximum margin clustering (MMC) approaches extend the large margin principle of SVM to unsupervised learning with considerable success. In this work, we utilize the ratio between the margin and the intra-cluster variance to explicitly consider both the separation and the compactness of the clusters in the objective. Moreover, we employ multiple kernel learning (MKL) to jointly learn the kernel...


Descriptor Learning Using Convex Optimisation

The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal: first, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity; second, it is shown that dimensionality reduct...


Language Learning Preference Styles of the Students of Shiraz University of Medical Sciences

Introduction. The recent emphasis on learning strategies and their effects on success has prompted researchers from different disciplines to carry out research in this regard. Methods. This paper reports the results of the study performed on 41 students, randomly selected out of 120 students, in an attempt to find out their learning style preferences. It has replicated the study presented by ...


Data-driven rank ordering - a preference-based comparison study

Data-driven rank ordering refers to the rank ordering of new data items based on the ordering inherent in existing data items. This is a challenging problem, which has received increasing attention in recent years in the machine learning community. Its applications include product recommendation, information retrieval, financial portfolio construction, and robotics. It is common to construct or...


Improving the Performance of Trajectory-based Multiobjective Optimisers by Using Relaxed Dominance

Several recently proposed techniques for multiobjective optimisation use the dominance relation to establish preference among solutions. In this paper, the Pareto archived evolutionary strategy and a population-based annealing algorithm are applied to test instances of a highly constrained combinatorial optimisation problem: academic space allocation. It is shown that the performance of both algo...



Publication date: 2017